Non-parametric Bayesian Dictionary Learning with Landmark-Dependent Hierarchical Beta Process

Authors

  • Mingyuan Zhou
  • Hongxia Yang
Abstract

There has been significant recent interest in dictionary learning and sparse coding, with applications in denoising, interpolation, feature extraction, and classification [1]–[3]. Increasingly it has been recognized that these models may be improved by imposing additional prior information beyond sparseness. For example, a locality constraint has been used successfully in the context of feature learning and image classification [4], and structured sparsity has been used for compressive sensing [5]. Other examples include hierarchical tree-based dictionary learning [6], submodular dictionary selection [7], and exploitation of self-similarity in images [8].

We propose a landmark-dependent hierarchical beta process to address dictionary learning for data endowed with an associated covariate. We explore the idea that data nearby in the covariate space (e.g., nearby in terms of temporal, spatial or cosine distances) are likely to share similar sparseness properties, and hence we employ "landmarks" to guide the usage probabilities of dictionary atoms. We address covariate-dependent dictionary learning from a Bayesian perspective. Compared to many optimization-based approaches, which assume the noise is Gaussian and its variance is known [1], our model has the advantage of automatically inferring an appropriate dictionary size and the underlying noise statistics.

Our model is connected to recent research devoted to removing the exchangeability assumption of the IBP [9], [10] and BP [11]; that assumption ignores the relational information provided by covariates. A dependent IBP (dIBP) model has been introduced recently, with a hierarchical Gaussian process (GP) used to account for covariate dependence [12]. In the proposed model, rather than imposing relational information via a parametric covariance matrix, as in a GP, we do so by employing a kernel-based construction. We introduce "landmarks" in the covariate space, whose positions are learned, defining local regions where the dictionary-atom usages are likely to be similar; normalized kernels centered at these learned landmarks establish links between the data and landmark-dependent sparseness properties. The proposed model is related to the kernel stick-breaking process (KSBP) [13] and Bayesian density regression (BDR) [14], although it is distinct from both. For example, the original KSBP construction focused primarily on covariate-dependent mixture models, and here we extend such ideas to a sparse factor analysis (SFA) model for covariate-dependent dictionary learning and sparse coding; we also …
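The central mechanism described above, linking each datum's atom-usage probabilities to nearby learned landmarks through normalized kernels, can be illustrated with a minimal generative sketch. The Gaussian kernel, its bandwidth, and all names below (n_landmarks, bandwidth, pi_landmark, and so on) are illustrative assumptions rather than the authors' exact specification; in the full model the landmark locations, kernel parameters, and per-landmark probabilities would be inferred from data rather than fixed.

```python
# Minimal sketch of a landmark-dependent, kernel-weighted atom-usage construction
# (illustrative assumptions throughout; not the authors' exact model).
import numpy as np

rng = np.random.default_rng(0)

n_data, n_landmarks, n_atoms = 200, 5, 64
covariates = rng.uniform(0.0, 1.0, size=(n_data, 1))      # e.g., temporal or spatial positions
landmarks = rng.uniform(0.0, 1.0, size=(n_landmarks, 1))  # landmark locations (learned in the full model)
bandwidth = 0.1                                            # kernel width (assumed)

# Per-landmark atom-usage probabilities, drawn from a truncated beta process prior.
pi_landmark = rng.beta(1.0 / n_atoms, 1.0, size=(n_landmarks, n_atoms))

# Normalized kernels linking each datum to the landmarks (Gaussian kernel assumed).
sq_dist = ((covariates[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
kernel = np.exp(-0.5 * sq_dist / bandwidth ** 2)
weights = kernel / kernel.sum(axis=1, keepdims=True)       # rows sum to one

# Each datum's atom-usage probabilities: a kernel-weighted mix of landmark probabilities,
# so data nearby in covariate space share similar sparseness patterns.
pi_data = weights @ pi_landmark                             # shape (n_data, n_atoms)

# Binary atom-usage indicators for each datum.
Z = rng.random((n_data, n_atoms)) < pi_data
```

Nearby data points receive similar kernel weights and hence similar atom-usage probabilities, which is the locality behavior the landmark construction is intended to encode.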


Related Articles

Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are als...
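As a point of reference for the beta process prior mentioned in this summary, the following is a minimal generative sketch of the standard finite (truncated) beta-Bernoulli approximation often used for dictionary learning. The truncation level, hyperparameters, and noise level are illustrative assumptions, not the cited paper's exact settings.

```python
# Minimal sketch of truncated beta-Bernoulli dictionary learning (generative side only;
# hyperparameters and noise level are assumed for illustration).
import numpy as np

rng = np.random.default_rng(1)

P, K, N = 64, 256, 1000          # patch dimension, dictionary truncation level, number of patches
a0, b0 = 1.0, 1.0                # beta process hyperparameters (assumed)
noise_std = 0.05                 # Gaussian noise level (assumed)

D = rng.normal(0.0, 1.0 / np.sqrt(P), size=(P, K))          # candidate dictionary atoms
pi = rng.beta(a0 / K, b0 * (K - 1.0) / K, size=K)           # atom-usage probabilities
Z = rng.random((N, K)) < pi                                  # binary usage indicators
S = rng.normal(0.0, 1.0, size=(N, K))                        # atom weights
X = (Z * S) @ D.T + noise_std * rng.normal(size=(N, P))      # noisy image patches

# In the full model, D, pi, Z, S and the noise variance are inferred
# (e.g., by Gibbs sampling or variational inference) rather than sampled as here.
```

Because the Beta(a0/K, b0(K-1)/K) prior pushes most usage probabilities toward zero, only a small subset of the K candidate atoms is used appreciably, which is how an appropriate dictionary size is effectively inferred.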


Landmark-Dependent Hierarchical Beta Process for Robust Sparse Factor Analysis

A computationally efficient landmark-dependent hierarchical beta process is developed as a prior for data with associated covariates. The landmarks define local regions in the covariate space where feature usages are likely to be similar. The landmark locations are learned, to which the data are linked through normalized kernels. The proposed model is well suited for local latent feature discov...


Bayesian Supervised Dictionary Learning

This paper proposes a novel Bayesian method for dictionary learning (DL) based classification using the beta-Bernoulli process. This non-parametric Bayesian technique is used to learn the sparse codes, the dictionary, and the classifier jointly. Existing DL-based classification approaches only offer point estimates of the dictionary, the sparse codes, and the classifier and can the...


Dependent Nonparametric Bayesian Group Dictionary Learning for Online Reconstruction of Dynamic MR Images

In this paper, we introduce a dictionary learning based approach to the problem of real-time reconstruction of MR image sequences that are highly undersampled in k-space. Unlike traditional dictionary learning, our method integrates both global and patch-wise (local) sparsity information and incorporates a priori information into the reconstruction process. Moreover, we use a Depende...


On the Analysis of Multi-Channel Neural Spike Data

Nonparametric Bayesian methods are developed for analysis of multi-channel spike-train data, with the feature learning and spike sorting performed jointly. The feature learning and sorting are performed simultaneously across all channels. Dictionary learning is implemented via the beta-Bernoulli process, with spike sorting performed via the dynamic hierarchical Dirichlet process (dHDP), with th...



Journal:

Volume:   Issue:

Pages:   -

Publication date: 2009